
    Centering, Anaphora Resolution, and Discourse Structure

    Centering was formulated as a model of the relationship between attentional state, the form of referring expressions, and the coherence of an utterance within a discourse segment (Grosz, Joshi and Weinstein, 1986; Grosz, Joshi and Weinstein, 1995). In this chapter, I argue that the restriction of centering to operating within a discourse segment should be abandoned in order to integrate centering with a model of global discourse structure. The within-segment restriction causes three problems. The first problem is that centers are often continued over discourse segment boundaries with pronominal referring expressions whose form is identical to those that occur within a discourse segment. The second problem is that recent work has shown that listeners perceive segment boundaries at various levels of granularity. If centering models a universal processing phenomenon, it is implausible that each listener is using a different centering algorithm. The third problem is that even for utterances within a discourse segment, there are strong contrasts between utterances whose adjacent utterance within a segment is hierarchically recent and those whose adjacent utterance within a segment is linearly recent. This chapter argues that these problems can be eliminated by replacing Grosz and Sidner's stack model of attentional state with an alternate model, the cache model. I show how the cache model is easily integrated with the centering algorithm, and provide several types of data from naturally occurring discourses that support the proposed integrated model. Future work should provide additional support for these claims with an examination of a larger corpus of naturally occurring discourses. Comment: 35 pages, uses elsart12, lingmacros, named, psfi
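
    A minimal sketch of the kind of integration described above, assuming the standard centering definitions; the function names and the idea that Cf(U_{n-1}) is simply read back from the cache are illustrative assumptions, not the chapter's implementation:

        # Hypothetical sketch: centering transitions computed when the previous
        # utterance's forward-looking centers are read from a cache that persists
        # across discourse segment boundaries (rather than a segment-bounded stack).

        def backward_center(cf_prev, realized_curr):
            # Cb(U_n): the highest-ranked element of Cf(U_{n-1}) realized in U_n.
            for entity in cf_prev:  # cf_prev is ranked, highest first
                if entity in realized_curr:
                    return entity
            return None

        def classify_transition(cb_prev, cb_curr, cp_curr):
            # Standard transition types; cp_curr is the highest-ranked Cf of U_n.
            if cb_prev is None or cb_curr == cb_prev:
                return "CONTINUE" if cb_curr == cp_curr else "RETAIN"
            return "SMOOTH-SHIFT" if cb_curr == cp_curr else "ROUGH-SHIFT"

        # Because the cache is not flushed at a segment boundary, a pronoun in the
        # first utterance of a new segment can still yield a CONTINUE transition.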

    Limited Attention and Discourse Structure

    This squib examines the role of limited attention in a theory of discourse structure and proposes a model of attentional state that relates current hierarchical theories of discourse structure to empirical evidence about human discourse processing capabilities. First, I present examples that are not predicted by Grosz and Sidner's stack model of attentional state. Then I consider an alternative model of attentional state, the cache model, which accounts for the examples, and which makes particular processing predictions. Finally, I suggest a number of ways that future research could distinguish the predictions of the cache model and the stack model. Comment: 9 pages, uses twoside, cl, lingmacro
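
    As a rough illustration of the contrast with a stack, here is a toy attentional cache with a fixed capacity and recency-based eviction; the class name, the capacity value, and the eviction policy are placeholder assumptions, not claims about the squib's model:

        # Hypothetical sketch of a limited-capacity attentional cache: a small
        # buffer of salient discourse entities. Items evicted to "main memory"
        # must be retrieved at a processing cost, unlike entries on an unbounded
        # stack that remain accessible until their segment is popped.

        from collections import OrderedDict

        class AttentionalCache:
            def __init__(self, capacity=3):       # capacity is an illustrative guess
                self.buffer = OrderedDict()        # entities, most recently used last
                self.capacity = capacity
                self.main_memory = set()            # evicted (no longer salient) entities

            def access(self, entity):
                # Returns True when the entity had to be retrieved from outside
                # the cache, i.e. when extra processing cost would be predicted.
                retrieved = entity not in self.buffer
                self.buffer.pop(entity, None)
                self.buffer[entity] = True
                if len(self.buffer) > self.capacity:
                    evicted, _ = self.buffer.popitem(last=False)
                    self.main_memory.add(evicted)
                return retrieved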

    Inferring Acceptance and Rejection in Dialogue by Default Rules of Inference

    This paper discusses the processes by which conversants in a dialogue can infer whether their assertions and proposals have been accepted or rejected by their conversational partners. It expands on previous work by showing that logical consistency is a necessary indicator of acceptance, but that it is not sufficient, and that logical inconsistency is sufficient as an indicator of rejection, but it is not necessary. I show how conversants can use information structure and prosody as well as logical reasoning in distinguishing between acceptances and logically consistent rejections, and relate this work to previous work on implicature and default reasoning by introducing three new classes of rejection: implicature rejections, epistemic rejections, and deliberation rejections. I show how these rejections are inferred as a result of default inferences, which, by other analyses, would have been blocked by the context. In order to account for these facts, I propose a model of the common ground that allows these default inferences to go through, and show how the model, originally proposed to account for the various forms of acceptance, can also model all types of rejection. Comment: 37 pages, uses fullpage, lingmacros, name
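
    The asymmetry between acceptance and rejection described above can be rendered schematically; the function, the predicate names, and the treatment of cues as boolean tests are illustrative assumptions rather than the paper's formalization:

        # Hypothetical sketch of the asymmetry: inconsistency with the common
        # ground suffices to infer rejection, but consistency alone does not
        # suffice to infer acceptance, so defeasible (default) cues decide the rest.

        def infer_response(utterance, common_ground, is_consistent, default_cues):
            if not is_consistent(utterance, common_ground):
                return "REJECTION"                 # sufficient condition
            # A consistent utterance is only a candidate acceptance: defeasible
            # cues (information structure, prosody, implicatures) may still
            # signal an implicature, epistemic, or deliberation rejection.
            for cue in default_cues:               # each cue is a boolean test
                if cue(utterance, common_ground):
                    return "REJECTION (by default inference)"
            return "ACCEPTANCE (by default)"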

    Inferring Narrative Causality between Event Pairs in Films

    To understand narrative, humans draw inferences about the underlying relations between narrative events. Cognitive theories of narrative understanding define these inferences as four different types of causality, ranging from pairs of events A, B where A physically causes B (X drop, X break) to pairs of events where A causes emotional state B (Y saw X, Y felt fear). Previous work on learning narrative relations from text has either focused on "strict" physical causality or has been vague about what relation is being learned. This paper learns pairs of causal events from a corpus of film scene descriptions, which are action-rich and tend to be told in chronological order. We show that event pairs induced using our methods are of high quality and are judged to have a stronger causal relation than event pairs from Rel-grams.
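
    Pairs like these are often scored with a pointwise-mutual-information-style statistic over ordered co-occurrences; the sketch below is a generic version of that idea (the function name, window size, and exact formula are assumptions, not necessarily the measure used in the paper):

        # Hypothetical sketch: scoring candidate causal event pairs (A, B) from
        # chronologically ordered scene descriptions. The score rewards pairs that
        # co-occur often (PMI-style term) and that tend to appear in the order
        # A-before-B rather than B-before-A (asymmetry term).

        import math
        from collections import Counter

        def causal_scores(event_sequences, window=5):
            unigrams, ordered_pairs = Counter(), Counter()
            total_events = 0
            for events in event_sequences:         # one list of events per scene
                total_events += len(events)
                unigrams.update(events)
                for i, a in enumerate(events):
                    for b in events[i + 1:i + 1 + window]:
                        ordered_pairs[(a, b)] += 1
            scores = {}
            for (a, b), n_ab in ordered_pairs.items():
                pmi = math.log((n_ab * total_events) / (unigrams[a] * unigrams[b]))
                asymmetry = math.log((n_ab + 1) / (ordered_pairs[(b, a)] + 1))
                scores[(a, b)] = pmi + asymmetry
            return scores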